| Name | Version | Summary | Date |
|---|---|---|---|
| haliosai | 1.0.5 | Advanced Guardrails and Evaluation SDK for AI Agents | 2025-10-23 18:05:33 |
| PrivacySherlock | 0.0.1 | A Python package for PII detection and classification | 2024-09-12 19:10:04 |
| llm-guard | 0.3.15 | LLM Guard is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). It offers sanitization, harmful-language detection, data-leakage prevention, and resistance to prompt injection attacks, keeping interactions with LLMs safe and secure. | 2024-08-22 19:39:48 |
| last_layer | 0.1.32 | Ultra-fast, low-latency LLM security solution | 2024-04-05 12:38:46 |

| Hour | Day | Week | Total |
|---|---|---|---|
| 96 | 1614 | 9962 | 331791 |